Search Results
Search for: All records
Total Resources: 2
Author / Contributor:
- Gardner, Matt (2)
- Gupta, Nitish (2)
- Singh, Sameer (2)
- Artzi, Yoav (1)
- Basmov, Victoria (1)
- Berant, Jonathan (1)
- Bogin, Ben (1)
- Chen, Sihao (1)
- Dasigi, Pradeep (1)
- Dua, Dheeru (1)
- Elazar, Yanai (1)
- Gottumukkala, Ananth (1)
- Hajishirzi, Hannaneh (1)
- Ilharco, Gabriel (1)
- Khashabi, Daniel (1)
- Lin, Kevin (1)
- Liu, Jiangming (1)
- Liu, Nelson F. (1)
- Mulcaire, Phoebe (1)
- Ning, Qiang (1)
The predominant challenge in weakly supervised semantic parsing is that of spurious programs that evaluate to correct answers for the wrong reasons. Prior work uses elaborate search strategies to mitigate the prevalence of spurious programs; however, these strategies typically consider only one input at a time. In this work we explore the use of consistency between the output programs for related inputs to reduce the impact of spurious programs. We bias the program search (and thus the model’s training signal) towards programs that map the same phrase in related inputs to the same sub-parts in their respective programs. Additionally, we study the importance of designing logical formalisms that facilitate this kind of consistency-based training. We find that a more consistent formalism leads to improved model performance even without consistency-based training. When combined, these two insights lead to a 10% absolute improvement over the best prior result on the Natural Language Visual Reasoning dataset.
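The consistency bias described in this abstract can be sketched roughly as follows: during program search, a candidate program for one input is boosted when it maps a phrase shared with a related input to the same program sub-part as some candidate for that related input. This is an illustrative sketch only, not the authors' implementation; the alignment representation and the `consistency_bonus` and `rescore_candidates` helpers are hypothetical simplifications.

```python
from typing import Dict, List, Set, Tuple

# A candidate program is represented here only by its search score and an
# alignment from input phrases to the program sub-parts they produce.
Alignment = Dict[str, Set[str]]
Candidate = Tuple[float, Alignment]


def consistency_bonus(align_a: Alignment, align_b: Alignment) -> float:
    """Count shared phrases whose aligned program sub-parts agree across the two inputs."""
    shared_phrases = align_a.keys() & align_b.keys()
    return float(sum(1 for phrase in shared_phrases if align_a[phrase] & align_b[phrase]))


def rescore_candidates(
    candidates_a: List[Candidate],
    candidates_b: List[Candidate],
    weight: float = 1.0,
) -> List[Candidate]:
    """Add a consistency bonus to each candidate for input A, using the
    best-agreeing candidate found for the related input B."""
    rescored = []
    for score_a, align_a in candidates_a:
        bonus = max(
            (consistency_bonus(align_a, align_b) for _, align_b in candidates_b),
            default=0.0,
        )
        rescored.append((score_a + weight * bonus, align_a))
    return sorted(rescored, key=lambda item: item[0], reverse=True)


if __name__ == "__main__":
    # Two candidates for input A: the first maps "yellow object" to the same
    # sub-part as a candidate for the related input B, so it gets promoted
    # over the spurious candidate despite a lower base score.
    cands_a = [
        (0.4, {"yellow object": {"filter_color(yellow)"}}),
        (0.5, {"yellow object": {"count_boxes()"}}),  # spurious mapping
    ]
    cands_b = [
        (0.6, {"yellow object": {"filter_color(yellow)"}}),
    ]
    print(rescore_candidates(cands_a, cands_b))
```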
Gardner, Matt; Artzi, Yoav; Basmov, Victoria; Berant, Jonathan; Bogin, Ben; Chen, Sihao; Dasigi, Pradeep; Dua, Dheeru; Elazar, Yanai; Gottumukkala, Ananth; et al. Findings of Empirical Methods in Natural Language Processing.
